
Do we trust artificial intelligence agents to mediate conflict? Not entirely

New study says we'll listen to virtual agents except when the going gets tough

Date:
October 16, 2019
Source:
University of Southern California
Summary:
We may listen to facts from Siri or Alexa, or directions from Google Maps or Waze, but would we let a virtual agent enabled by artificial intelligence help mediate conflict among team members? A new study says not just yet.


Researchers from USC and the University of Denver created a simulation in which a three-person team was supported by an on-screen virtual agent avatar during a mission designed to ensure failure and elicit conflict. The study examined whether virtual agents could serve as mediators that improve team collaboration when conflict arises.

Confess to them? Yes. But in the heat of the moment, will we listen to virtual agents?

While some of the researchers who contributed to this study (Gale Lucas and Jonathan Gratch of the USC Viterbi School of Engineering and the USC Institute for Creative Technologies) had previously found that one-on-one human interactions with a virtual agent therapist yielded more confessions, in this study, "Conflict Mediation in Human-Machine Teaming: Using a Virtual Agent to Support Mission Planning and Debriefing," team members were less likely to engage with a male virtual agent named "Chris" when conflict arose.

Participating members of the team did not physically accost the device (as we have seen humans attack robots in viral social media posts), but rather were less engaged and less likely to listen to the virtual agent's input once failure ensued and conflict arose among team members.

The study was conducted in a military academy environment in which 27 scenarios were engineered to test how a team that included a virtual agent would react to failure and the ensuing conflict. The virtual agent was by no means ignored: the teams responded socially to it during mission planning (nodding, smiling and acknowledging its input by thanking it), but the longer the exercise progressed, the less they engaged with it. The participants did not entirely blame the virtual agent for their failure.

"Team cohesion when accomplishing complex tasks together is a highly complex and important factor," says lead author, Kerstin Haring, an assistant professor of computer science at the University of Denver.

"Our results show that virtual agents and potentially social robots might be a good conflict mediator in all kinds of teams. It will be very interesting to find out the interventions and social responses to ultimately seamlessly integrate virtual agents in human teams to make them perform better."

Study co-author Gale Lucas, a research assistant professor of computer science at USC and a researcher at the Institute for Creative Technologies, adds that feedback from study participants indicates they perceived virtual agents as neutral and unbiased. She would like to continue the work to see whether virtual agents can be applied "to help us make better decisions" and to explore "what it takes to have us trust virtual agents."

While this study was conducted in a military academy with its particular structures, the researchers hope to develop the project to improve team processes in all sorts of work environments.

"Conflict Mediation in Human-Machine Teaming: Using a Virtual Agent to Support Mission Planning and Debriefing" by Kerstin S. Haring, Jessica Tobias, Justin Waligora, Elizabeth Phillips, Nathan L. Tenhundfeld, Gale Lucas, Jonathan Gratch, Ewart J. de Visser, and Chad C. Tossell will be presented at the RO-MAN, the 28th IEEE International Conference on Robot & Human Interactive Communication on October 15th in New Delhi, India.


Story Source:

Materials provided by University of Southern California. Note: Content may be edited for style and length.


